Nat Biomed Eng 7(6): 743-755, Jun 2023.
Article in English | MEDLINE | ID: covidwho-20245377

ABSTRACT

During the diagnostic process, clinicians leverage multimodal information, such as the chief complaint, medical images and laboratory test results. Deep-learning models for aiding diagnosis have yet to meet this requirement of leveraging multimodal information. Here we report a transformer-based representation-learning model as a clinical diagnostic aid that processes multimodal input in a unified manner. Rather than learning modality-specific features, the model leverages embedding layers to convert images and unstructured and structured text into visual tokens and text tokens, and uses bidirectional blocks with intramodal and intermodal attention to learn holistic representations of radiographs, the unstructured chief complaint and clinical history, and structured clinical information such as laboratory test results and patient demographic information. The unified model outperformed an image-only model and non-unified multimodal diagnosis models in the identification of pulmonary disease (by 12% and 9%, respectively) and in the prediction of adverse clinical outcomes in patients with COVID-19 (by 29% and 7%, respectively). Unified multimodal transformer-based models may help streamline the triaging of patients and facilitate the clinical decision-making process.
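The core idea described above — embedding layers that map images and text into a shared token space, followed by attention that spans both modalities — can be sketched in miniature. This is an illustrative toy, not the authors' implementation: the dimensions, the linear patch projection, and the single-head attention function are all hypothetical simplifications of the transformer blocks the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 32  # shared embedding dimension (hypothetical)

# --- Embedding layers: map each modality into the same token space ---
# Visual tokens: flatten image patches and project them linearly.
patches = rng.normal(size=(16, 8 * 8))      # 16 patches of an 8x8 image crop
W_img = rng.normal(size=(64, d)) * 0.1
visual_tokens = patches @ W_img             # shape (16, d)

# Text tokens: look up embeddings for chief-complaint / lab-result token ids.
vocab = rng.normal(size=(1000, d)) * 0.1
text_ids = np.array([5, 42, 7, 99])         # toy token ids
text_tokens = vocab[text_ids]               # shape (4, d)

# --- Unified sequence: intermodal attention sees all tokens at once ---
x = np.concatenate([visual_tokens, text_tokens], axis=0)  # shape (20, d)

def self_attention(x, Wq, Wk, Wv):
    """Single-head scaled dot-product attention over the whole sequence."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

Wq, Wk, Wv = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
out = self_attention(x, Wq, Wk, Wv)
print(out.shape)  # every token now attends across both modalities
```

Because the visual and text tokens live in one sequence, a single attention pass covers both the intramodal case (image token attending to image token) and the intermodal case (image token attending to a lab-value token); the paper's bidirectional blocks restrict and combine these attention patterns explicitly.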


Subjects
COVID-19 , Humans , COVID-19/diagnosis , Electric Power Supplies , COVID-19 Testing